Session List

Introduction to Data Analysis and Visualisation in Python

This is a virtual Data Carpentry Workshop offering a hands-on introduction to the Python programming language. The course is aimed at postgraduate students and researchers who want to learn more about automation and improving the reproducibility of their research.

NB: Session runs in parallel with Introduction to R.

Day 1 recordings:

View part 1 session recording here.

View part 2 session recording here.

View part 3 session recording here.

Day 2 recording:

View part 4 session recording here.

Introduction to Data Analysis and Visualisation in R

This is a virtual Data Carpentry Workshop offering a hands-on introduction to the statistical programming language R. The course is aimed at postgraduate students and researchers who want to learn more about using R for their research.

NB: Session runs in parallel with Introduction to Python.

View Day 1 session recording here.

View Day 2 session recording here.

MATLAB/Simulink Resources for Research – Information Session

MathWorks offers many free online services, resources, documentation, solutions and training alongside MATLAB/Simulink for research. You will learn how to navigate MathWorks resources to find solutions for your specific research domain, and how to acquire new skills for tackling your research problems. MATLAB/Simulink can empower your research outcomes and innovation.

Pre-requisites: Create a MathWorks account with your university email address here.

NB: Session time overlaps with Introduction to GADI.

View the session slides here.

View session recording here.

Computational Skills with MATLAB for Research

Develop new computational skills, or extend your existing ones, with MATLAB during your research. Join this session to discover how to get started if you are new to MATLAB.

We will begin with data loading and visualisation and discuss MATLAB's various data types, since data management is key to successful research outcomes and impact. You will discover Tasks and Apps, features that can generate MATLAB code for you and make it easy to learn new computational skills. If you are a MATLAB expert, check out new MATLAB features such as Tasks and Apps to accelerate the pace of your research. Do not miss this session: we will also give a quick introduction to Apps for deep learning and machine learning.

Pre-requisites: Create a MathWorks account with your university email address here.

NB: Session time overlaps with Introduction to GADI and Introduction to NCI Data Collections.

View the session resources here.

View session recording here.

Introduction to GADI

Gadi is Australia’s most powerful supercomputer, a highly parallel cluster comprising more than 150,000 processor cores on ten different types of compute nodes. Gadi accommodates a wide range of tasks, from running climate models to genome sequencing, and from designing molecules to astrophysical modelling. Introduction to Gadi is designed for new users, or for users who want a refresher on the basics of Gadi.

NB: Session time overlaps with MATLAB/Simulink Resources for Research and Computational Skills with MATLAB for Research.

Introduction to High Performance Data

Introduction to High Performance Data is designed for new users, or for users who want a refresher on the basics of accessing and using high-performance scientific data at NCI. NCI is Australia’s premier supercomputing and big data facility, with over 10 petabytes of nationally significant research data collections. We provide analysis tools and virtual research environments that enable users to interact with and manipulate their scientific data effectively.

NB: Session time overlaps with Computational Skills with MATLAB for Research.

NVivo Part 1

This workshop will take you through setting up a project in the qualitative software NVivo. In particular, it covers how to import different types of materials into a project, including Word, PDF, audio, video and picture files saved on your computer; survey data; web content; emails; and references and PDFs from EndNote. It also covers how to store information about the cases and files in your project as case and file classifications.

NB: Session time overlaps with Getting Started with Using the (Nimbus Research) Cloud and A Brief Introduction to Qualtrics.

View session recording here.

Getting Started with Using the (Nimbus Research) Cloud

In this 90-minute webinar we introduce cloud computing and detail the Pawsey Supercomputing Centre’s cloud computing resource called the “Nimbus Research Cloud”.

You will have ample time for questions and if you’re interested, you can also access a test environment to experiment with what you can do with cloud computing.

NB: Session time overlaps with NVivo Part 1 and A Brief Introduction to Qualtrics.

A Brief Introduction to Qualtrics

Are you conducting research that requires participant recruitment? Conducting a survey or experimental research online? Then Qualtrics might be the platform for you. Qualtrics is an online survey platform that can handle everything from simple questionnaires to detailed research projects. Design your survey with their intuitive drag-and-drop survey tool, powerful logic, and 100+ question types. This session is aimed at those who are new to using online survey platforms (or would like a refresher). Content covers creating a survey, selecting and setting up the most common question types, survey development (including survey flow and logic functions), and how to set up your data for export to Excel and SPSS.

NB: Session time overlaps with NVivo Part 1 and Getting Started with Using the (Nimbus Research) Cloud.

View the Intro to Qualtrics manual here.

View session recording here.

Introduction to the command line (bash) and version control (git)

This is a virtual Software Carpentry Workshop offering a hands-on introduction to the bash shell and version control with git. The course is aimed at postgraduate students and researchers who want to learn more about automation and the reproducibility of their research.

NB: Session time overlaps with Using the BCCVL to solve environmental problems and Active Data Management with CloudStor.

View session recording here.

Introduction to Julia

Learning a new programming language can be a substantial time investment. Given the emergence of different programming languages, it is often difficult to decide whether learning a new one is worth the time. The aim of this talk is to share my experience in learning and using Julia for research from a practical perspective. The talk will cover some of the useful features and annoyances of Julia, as well as how it relates to other languages such as Python and R.

View session recording here.

View slides here.

View the GitHub repo here.

Automatic Python to Julia converter here.

Make your Data FAIR

What does a good data management strategy look like? Have you heard of the FAIR principles? Applying the FAIR principles to your research data management practice has many benefits, including increased visibility and citation of your research, new partnerships and collaborations, and improved reproducibility and reliability of your research. Come along to this workshop to learn about the FAIR principles, assess how FAIR your data is, and identify practical steps to make your data more FAIR.

View session recording here.

View ARDC self-assessment tool here.

View Top 10 FAIR Data and Software Things here.

View Curtin Research Data Management Guide here.

Find a subject repository – Registry of research data repositories here.

Using the BCCVL to solve environmental problems

Species Distribution Models (SDMs) can be used to understand the potential distribution of a species, or the environmental suitability of habitat, based on a set of environmental conditions together with species occurrence data. In this workshop we address the fundamental aspects of SDMs: the types of data these models need, different types of algorithms, and model evaluation, as well as how Climate Change projections can be used to investigate how species distributions might change in the future under a given carbon emission scenario.

We then introduce the workshop participants to the Biodiversity and Climate Change Virtual Laboratory (BCCVL), a freely available tool that supports researchers, students and decision-makers through easy access to pre-processed and curated global biodiversity, climate and environmental datasets, integrated with a suite of analytical tools. The BCCVL is a point-and-click online platform for modelling species responses to environmental conditions, which provides an easy introduction to the scientific concepts behind the models without the need for the user to understand the code behind them.

The BCCVL encourages a high standard of species distribution modelling whilst removing the technical barriers that often come with running models. This approach allows researchers, decision-makers and students to focus on conducting meaningful science and producing high-impact results, instead of focussing too much on the technical side of things.

By the end of the workshop, participants will have an overview of what SDMs are and how to use the BCCVL in their own time to solve their specific environmental problems.

NB: Session time overlaps with Introduction to the command line (bash) and version control (git).

View session recording here.

View the video BCCVL animation here.

Visit the BCCVL (and then log in or register) here.

Active Data Management with CloudStor

In this workshop we will be exploring and discussing methods for active data management. Participants will become familiar with the CloudStor interface and its associated tools and services for managing active research data. Learn how to organise, maintain, store and analyse active data, and understand safe and secure ways of sharing and storing data.

Topics such as cloud storage, collaborative editing, versioning and data sharing will be discussed and demonstrated. You will learn how to set up an automatic way to move your data to the cloud so that it stays safe and secure, and which tools are best for sharing your data securely.

This is a hands-on workshop using CloudStor, AARNet’s file sync, share and storage service. Please check that you have access to cloudstor.aarnet.edu.au. If you do not have CloudStor access you can still attend; just advise the organiser beforehand so arrangements can be made: Sara.King@aarnet.edu.au.

NB: Session time overlaps with Introduction to the command line (bash) and version control (git).

R Markdown

During this session you'll get a brief introduction to R Markdown. We'll cover the basic building blocks of an R Markdown file and learn how to knit reports from an R Notebook using RStudio. The only prerequisite is having R and RStudio installed on your computer. The relevant R Markdown packages are installed with RStudio by default. You don't even need to know much R!

NB: Session time overlaps with Academic Career Planning.

View session recording here.

Fully reproducible R projects using workflowR

Have you started your reproducibility journey in R and made nicely reproducible markdown documents? Do you feel like you could go further? Then let me introduce workflowR, an R package that handles reproducibility for you. WorkflowR builds a project structure into which you add markdown documents. With every build step, workflowR compiles your markdown documents in a fresh environment, then builds a website on GitHub or GitLab containing the entire analysis and past iterations of that analysis. All your results will be in one place, guaranteed to be reproducible by future you. In this workshop I will walk you through the basics of the package and we’ll build a project website together. Recommended for people with a little experience in git and R, ideally with a GitHub or GitLab account.

View session recording here.

Lunch and Learn - Teaching Online

In this 60-minute interactive session, we’ll share tips, tricks and gotchas. Whether you’re new to teaching online or have a seasoned online ‘presence’, join us. Together we’ll create a list of good practices that you can refer to over and over again, and that you can continue to build upon.

NB: Session time overlaps with Jupyter Notebooks for Absolute Beginners and NVivo Part 2.

Is your science causing your laptop to burn? Identifying when to scale your research

How do you know when your dataset becomes “too big”? How do you know when your laptop can no longer do what you need it to do for your research? When is it best to use cloud resources? When is it best to use supercomputing resources? When do you need both (or neither)?

The answers to these questions are project-dependent, and each project is different… but there are guidelines and best practices to help you decide.

During this interactive session, we will seek to answer these and associated questions. We’ll hear real stories from individuals who have had to navigate these same questions and who will be on hand to facilitate discussion, based on years of experience working with early, mid and late career researchers.

NB: Session time overlaps with Jupyter Notebooks for Absolute Beginners and NVivo Part 2.

Academic Career Planning

This workshop helps you to explore the elements you need to consider in establishing a career strategy when pursuing an academic career. Designing your career is a process, not something that will happen two weeks after submitting your thesis or completing your higher degree … so it is important that you start thinking about this early and begin developing your job search strategy.

NB: Session time overlaps with R Markdown.

View session recording here.

View slides here.

Jupyter Notebooks for Absolute Beginners

This workshop will introduce you to Jupyter Notebooks, a digital tool that has exploded in popularity in recent years among those working with data. You will learn what they are, what they do and why you might like to use them. It is an introductory set of lessons for those who are brand new and have little or no knowledge of coding and computational methods in research. By the end of the workshop you will have a good understanding of what Notebooks can do, how to open one up, perform some basic tasks and save it for later. If you are really into it, you will also be able to continue experimenting after the workshop by using other people’s notebooks as springboards for your own adventures!

This workshop is targeted at absolute beginners and the ‘tech-curious’. It includes a hands-on component using basic programming commands, but requires no previous knowledge of programming. Please check that you have access to cloudstor.aarnet.edu.au. If you do not have CloudStor access you can still attend; just advise the trainer beforehand so arrangements can be made: Sara.King@aarnet.edu.au.

NB: Session time overlaps with Lunch and Learn - Teaching Online, Is your science causing your laptop to burn? and NVivo Part 2.

NVivo Part 2

This workshop provides an introduction to coding and searching in the qualitative software NVivo. In particular, it covers how to organise project materials according to specific themes and topics through the process of coding; how to use mind maps to visualise the layout of codes; and how to conduct searches and create graphs (including by making use of attributes of case and file classifications).

NB: Session time overlaps with Is your science causing your laptop to burn? and Jupyter Notebooks for Absolute Beginners.

View session recording here.

Product Tool Box: Web of Science

This webinar will showcase the Web of Science platform and cover aspects of the tools Web of Science, EndNote, ResearcherID and ORCID. Following the session, participants should be able to easily navigate the Web of Science platform.

NB: Session time overlaps with Parallel Computing with MATLAB and AWS.

View session recording here.

Parallel computing with MATLAB and AWS

Are you tired of your MATLAB code running slowly? Do you need to scale out to the cloud to access big data? Then this presentation is for you: we’ll first cover tips and tricks to maximise the performance of your code, then show how to scale up to leverage the full power of your computer and beyond into the cloud. Specifically, we’ll touch on multi-core and multi-thread computation, cloud scaling to virtually unlimited CPUs, and working with data that is too large for the RAM on your machine to handle. Pre-requisites: no hard prerequisites, but some experience with MATLAB or Simulink is suggested.

NB: Session time overlaps with Product Tool Box: Web of Science.

View session recording here.

View session slides here.